Before You Adopt GenAI
Canadians are being asked to adopt Generative AI as part of an economic strategy promoted by our federal government in partnership with the tech community. While economic growth is the focus of this strategy, the social, individual, and environmental risks of adoption are not widely discussed. Educators are often pressured to embrace Generative AI without appropriate time to critically assess its broad context and potential impacts. In the meantime, students are eagerly adopting platforms such as ChatGPT without carefully considering the many possible consequences. This document is designed to provide students, instructors, researchers, and administrators with a quick reference guide that can inform a principled, well-contextualized, and critically sound adoption of Generative AI.
Image credit: ChatGPT 5. Prompt: “create an image of an academic ‘adopting’ an AI.” Note: This image may serve as a case study of systemic bias embedded in LLM outputs.

How to Use This Document
This document is concerned primarily with Generative AI (GenAI), or more specifically, Large Language Models (LLMs) and the platforms that make them accessible, such as ChatGPT, Claude, DeepSeek, and Gemini. For definitions of terms related to Generative AI, consult the glossaries provided by the University of British Columbia, the Massachusetts Institute of Technology, or other reliable sources.
Students
For students, this document will help you make an informed choice about whether and how to adopt Generative AI tools while maintaining your personal and academic integrity. It also provides information about how Generative AI could affect your future career, including how your adoption of LLMs may shape the cognitive and communication skills sought by employers.
Instructors
For instructors, this document can be used to spark class discussions and inspire course assignments. It can also inform a Generative AI course policy, which can be co-created with students. Discussing these topics in class can shift the focus of assessment from policing LLM use to developing a student’s ethos.
Researchers
For researchers, this document provides resources to contextualize the ethical implications of using LLMs in your work. Professional organizations and individual journals often provide their own guidance on the use of Generative AI; transparency about the use of these tools in research is currently the most common policy.
Administrators
Administrators can consult this document to make informed and responsible choices that will guide principled and well-contextualized policies about the use of Generative AI at their institutions. The issues discussed here should not be ignored in the pursuit of economic growth or corporate partnerships, or out of fear of being viewed as “anti-innovative.”
Issues, Concerns, and Considerations
These conversations are ongoing and constantly evolving. This is a living document and subject to change as new research is developed.
Academic Integrity
Every university promotes an ethical and moral code based on academic integrity. At the University of Waterloo, the code is rooted in the values of honesty, trust, fairness, respect, and responsibility. These values can be threatened by the use of LLMs. Using text, code, or images from ChatGPT in an assignment without attributing the source is dishonest (the work is not yours), unfair (students who did not use LLMs may have worked harder), and can erode trust between students, and between students and instructors.
Further Readings
American Psychological Association. (2023, November). The APA Journals Policy on Generative AI. https://www.apa.org/pubs/journals/resources/publishing-tips/policy-generative-ai
Prada, L. (2025, May 9). Students Are Telling Chatbots To Toss in a Few Typos to Fool Teachers. Vice. https://www.vice.com/en/article/students-are-telling-chatbots-to-toss-in-a-few-typos-to-fool-teachers/
University of Waterloo. (n.d.). Academic Misconduct. Academic Integrity. https://uwaterloo.ca/academic-integrity/academic-misconduct
White, R. (2025). Advice for Researchers on the Use of Generative AI to Produce Journal Articles. JDS Communications, 6(3), 452–457. https://doi.org/10.3168/jdsc.2024-0707
Zimmerman, J. (2023, August 29). Opinion: Here’s my AI policy for students: I don’t have one. The Washington Post. Reposted at Can Literature Save the Environment? https://academics.skidmore.edu/blogs/ssp016site-fall2023/thoughts-on-generative-ai/generative-ai-and-writing/heres-my-ai-policy-for-students-i-dont-have-one-professor-jonathan-zimmerman-the-university-of-pennsylvania/
Bias Reinforcement
Because LLMs are trained on data that has been socially produced, these systems can perpetuate biases that exist within society. For example, early LLMs produced mostly male figures when prompted to generate the image of a doctor, lawyer, or professor, and hiring platforms that use LLMs can be informed by racial stereotypes. Moreover, attempts to correct gender and racial bias in LLMs can superficially mask prejudices that persist deeper within the model.
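The kind of resume audit described in the readings below can be made concrete with a short sketch. The following is a minimal, illustrative paired (name-swap) audit in the spirit of An et al. (2025). Everything in it is hypothetical: score_resume stands in for whatever LLM call a hiring platform might make, and the names and resume text are placeholders rather than data from any study.

```python
import statistics

def score_resume(resume_text: str) -> float:
    # Hypothetical stand-in for a call to the model under audit.
    # Returning a constant here just lets the sketch run end to end.
    return 50.0

# One resume body; only the applicant's name changes between groups.
RESUME_TEMPLATE = """{name}
10 years of software development experience.
B.Sc. in Computer Science. References available on request."""

NAME_GROUPS = {
    "group_a": ["Emily Walsh", "Greg Baker"],
    "group_b": ["Lakisha Washington", "Jamal Jefferson"],
}

for group, names in NAME_GROUPS.items():
    scores = [score_resume(RESUME_TEMPLATE.format(name=n)) for n in names]
    print(group, "mean score:", round(statistics.mean(scores), 1))
# Because every word except the name is identical across resumes, any
# systematic score gap between the groups is evidence of name-based bias.
```

An overt audit like this is only a first step: as Hofmann et al. (2024) show, a model can pass name-based checks while still encoding covert, dialect-based prejudice.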
Further Readings
An, J., Huang, D., Lin, C., & Tai, M. (2025, March 12). Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation. PNAS Nexus, 4(3), pgaf089. https://doi.org/10.1093/pnasnexus/pgaf089
Hofmann, V., Kalluri, P.R., Jurafsky, D., & King, S. (2024, August 28). AI generates covertly racist decisions about people based on their dialect. Nature, 633, 147–154. https://doi.org/10.1038/s41586-024-07856-5
Nyarko, J., & Schreiber, M. (2024, March 19). SLS’s Julian Nyarko on Why Large Language Models Like ChatGPT Treat Black- and White-Sounding Names Differently. Stanford Law School Blogs. https://law.stanford.edu/2024/03/19/slss-julian-nyarko-on-why-large-language-models-like-chatgpt-treat-black-and-white-sounding-names-differently/
UNESCO. (2024, March 7). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
Zack, T., Lehman, E., Suzgun, M., Rodriguez, J. A., Celi, L. A., Gichoya, J., Jurafsky, D., Szolovits, P., Bates, D. W., Abdulnour, R.-E. E., Butte, A. J., & Alsentzer, E. (2024). Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: A model evaluation study. The Lancet Digital Health, 6(1), e12–e22. https://doi.org/10.1016/S2589-7500(23)00225-X
Cognition
Like most technological innovations, Generative AI is a form of automation. Whereas industrial robots automated the assembly of automobiles, for example, Generative AI platforms can automate the writing of essays, the coding of software, and the design of images. In educational settings, if the automation of these tasks is adopted passively, it may short-circuit learning processes that are essential to a full and sustainable education. These processes include critical thinking, information processing, and independent idea generation. Without specific contexts and guidelines for the educational use of LLM platforms, students may resort to passive use of these tools and develop a cognitive dependence that undermines both the short-term and long-term goals of education.
Further Readings
Chow, A. R. (2025, June 17). ChatGPT’s Impact On Our Brains According to an MIT Study. TIME. https://time.com/7295195/ai-chatgpt-google-learning-school/
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (arXiv:2506.08872). arXiv. https://doi.org/10.48550/arXiv.2506.08872
Morrison, A. (2023). Meta-Writing: AI and Writing. Composition Studies, 51(1), 155–162. https://link-gale-com.proxy.lib.uwaterloo.ca/apps/doc/A759155199/LitRC?u=uniwater&sid=summon&xid=bd253930
Parrish, A. (2021, August 13). Language Models Can Only Write Poetry. Allison Posts. https://posts.decontextualize.com/language-models-poetry/
Pierce, D. (2024, February 28). From Eliza to ChatGPT: why people spent 60 years building chatbots. The Verge. https://www.theverge.com/24054603/chatbot-chatgpt-eliza-history-ai-assistants-video
Suriano, R., Plebe, A., Acciai, A., & Fabio, R. A. (2025). Student interaction with ChatGPT can promote complex critical thinking skills. Learning and Instruction, 95, 102011. https://doi.org/10.1016/j.learninstruc.2024.102011
Tarnoff, B. (2023, July 25). Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI. The Guardian. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
Copyright and Data Sovereignty
One of the most widely publicized criticisms of LLMs is their unauthorized use of data “scraped” from the Internet or accessed from datasets that were collected without the consent of artists, writers, coders, or cultural groups. In response, the term “data sovereignty” has been coined to identify the right of an individual or group to oversee the use of data they generate, either intentionally or passively. Questions of copyright and ownership are difficult to manage since LLMs can generate deepfakes, design “in the style” of content creators, replicate code that may have been used in patent-protected applications, and appropriate sensitive cultural information from groups that have been subjected to colonialism.
Further Readings
Bhattacharjee, R. (2024, June 18). Indigenous data stewardship stands against extractivist AI. UBC Faculty of Arts. https://www.arts.ubc.ca/news/indigenous-data-stewardship-stands-against-extractivist-ai/
Chayka, K. (2023, February 10). Is A.I. Art Stealing from Artists? The New Yorker. https://www.newyorker.com/culture/infinite-scroll/is-ai-art-stealing-from-artists
Glynn, P. (2025, February 25). Artists release silent album in protest against AI using their work. BBC News. https://www.bbc.com/news/articles/cwyd3r62kp5o
Karadeglija, A. (2025, July 19). Ottawa weighs plans on AI, copyright as OpenAI fights Ontario court jurisdiction. CBC News. https://www.cbc.ca/news/politics/ottawa-weighs-plans-on-ai-copyright-as-openai-fights-ontario-court-jurisdiction-1.7589354
Lindau, B. (2023, April 6). ChatGPT: Who Owns the Content Generated? Caldwell Law. https://caldwelllaw.com/news/chatgpt-who-owns-the-content-generated/
SOA Policy Team. (2025, March 21). The LibGen data set – what authors can do. The Society of Authors. https://societyofauthors.org/2025/03/21/the-libgen-data-set-what-authors-can-do/
Wong, S. (2023, May 17). The Origin of Clouds. Logic(s), 19. https://logicmag.io/supa-dupa-skies/the-origin-of-clouds/
Disinformation
Generative AI systems can spread misinformation contained in the datasets on which they are trained. Large Language Models have no model of truth and thus invent, or “hallucinate,” outputs that resemble their training data but are false. For example, AI-generated research papers often include incorrect citations or titles of books and papers that do not exist.
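A practical countermeasure is to verify every machine-generated citation against an authoritative index before trusting it. The sketch below is a minimal example of that habit: it checks a DOI against the public Crossref REST API (assuming the third-party requests library is installed) and prints the registered title for comparison with what the chatbot claimed.

```python
import requests

def verify_doi(doi: str) -> dict | None:
    """Return Crossref metadata for a DOI, or None if Crossref has no record."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # Unknown DOI: the citation may be hallucinated.
    resp.raise_for_status()
    return resp.json()["message"]

# Example: a DOI cited elsewhere in this document.
record = verify_doi("10.1038/s41586-024-07856-5")
if record is None:
    print("No such DOI - treat the citation as suspect.")
else:
    print("Registered title:", record["title"][0])
```

A missing record is not proof of fabrication (many legitimate works have no DOI), but a 404 or a mismatched title is a strong signal that the citation deserves manual checking.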
Further Readings
Klee, M. (2025, May 5). People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies. Rolling Stone. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
McMahon, L., & Kleinman, Z. (2024, May 9). Glue pizza and eat rocks: Google AI search errors go viral. BBC News. https://www.bbc.com/news/articles/cd11gzejgz4o
Stokel-Walker, C. (2024, May 1). How AI Chatbots are Infiltrating Scientific Publishing. Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/
Environmental Impacts
The creation, maintenance, and use of Generative AI systems cause extensive and well-documented environmental impacts. These include the energy required to power data centers, the water needed to cool them, and the minerals and other raw materials mined for the fabrication of computing systems. These extractive systems are mostly invisible to North American users of LLMs, and their strategic locations can have a disproportionate impact on lower-income and racialized communities.
Further Readings
Crawford, K., & Joler, V. (2018). Anatomy of an AI System. https://anatomyof.ai/
Fleury, M., & Jimenez, N. (2025, July 10). ‘I can’t drink the water’ – life next to a US data centre. BBC. https://www.bbc.com/news/articles/cy8gy7lv448o
Ren, S., & Wierman, A. (2024, July 15). The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Job Automation
Generative AI has been positioned as a revolutionary force in the labour market, one expected to automate or nearly automate many jobs. This could disrupt the most common career paths of students, ranging from content writing and graphic design to web and software development. While students are often advised to embrace Generative AI to secure their future careers, there is evidence that traditional educational outcomes such as critical thinking and written and verbal communication are the skills most desired in entry-level workers.
Further Readings
Blake, S. (2025, January 23). Employers Would Rather Hire AI Than Gen Z Graduates: Report. Newsweek. https://www.newsweek.com/employers-would-rather-hire-ai-then-gen-z-graduates-report-2019314
Fisher, S. (2023, September 26). Hollywood writers’ contract deal includes historic AI rules. Axios. https://www.axios.com/2023/09/27/ai-wga-hollywood-writers-contract
Fitzgerald, J. (2025, June 24). Why Soft Skills Still Matter in the Age of AI. Working Knowledge: Harvard Business School. https://www.library.hbs.edu/working-knowledge/why-soft-skills-still-matter-in-the-age-of-ai
Kay, G. (2023, April 29). Software engineers are panicking about being replaced by AI. Business Insider. https://www.businessinsider.com/software-engineers-tech-panicking-golden-age-over-chatgpt-ai-blind-2023-4
Koetsier, J. (2025, August 26). AI Kills Jobs, Stanford Study Finds, Especially For Young People. Forbes. https://www.forbes.com/sites/johnkoetsier/2025/08/26/ai-kills-jobs-says-stanford-study-at-least-in-these-circumstances/
One in four jobs at risk of being transformed by GenAI, new ILO–NASK Global Index shows. (2025, May 20). International Labour Organization. https://www.ilo.org/resource/news/one-four-jobs-risk-being-transformed-genai-new-ilo%E2%80%93nask-global-index-shows
Mental Health
Chatbots have been adopted in many professional contexts, from online shopping to medical office scheduling. In education, chatbots have been pitched as a powerful tool to provide tutoring and mentorship for students. Relatively little research has examined how this form of companionship affects mental health, but early findings suggest that chatbot use can increase depression among susceptible students and strain peer relationships. Moreover, dependence on chatbots for mentorship can undermine healthy intergenerational relationships. While chatbots can provide efficiencies by filling in for human tutors and mentors, their lack of emotional intelligence means they should be used with caution in educational settings.
Further Readings
Campbell, I. C. (2025, March 21). Joint studies from OpenAI and MIT found links between loneliness and ChatGPT use. Engadget. https://www.engadget.com/ai/joint-studies-from-openai-and-mit-found-links-between-loneliness-and-chatgpt-use-193537421.html
Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study (arXiv:2503.17473). arXiv. https://doi.org/10.48550/arXiv.2503.17473
McBain, R. K., Cantor, J. H., Zhang, L. A., Baker, O., Zhang, F., Burnett, A., Kofner, A., Breslau, J., Stein, B. D., Mehrotra, A., & Yu, H. (2025). Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment. Psychiatric Services, appi.ps.20250086. https://doi.org/10.1176/appi.ps.20250086
Obradovich, N., Khalsa, S. S., Khan, W. U., Suh, J., Perlis, R. H., Ajilore, O., & Paulus, M. P. (2024). Opportunities and risks of large language models in psychiatry. NPP—Digital Psychiatry and Neuroscience, 2(1), 8. https://doi.org/10.1038/s44277-024-00010-z
Rhodes, J. (n.d.). What’s lost when young people turn to AI for support instead of real people? The Chronicle of Evidence-Based Mentoring. https://www.evidencebasedmentoring.org/whats-lost-when-young-people-turn-to-ai-for-support-instead-of-real-people/
Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo
Privacy and Security
LLMs are trained on data supplied by humans, and the content of training datasets is often hidden for the sake of corporate secrecy. This raises several questions related to privacy and security in the use of LLMs. For example, what happens if private information you provide in a prompt makes its way into a dataset that generates public outputs? Can your LLM prompts be subpoenaed by a court of law? These concerns also raise questions about “data sovereignty” and the problematic lack of transparency provided by the makers of LLM platforms.
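Until platform makers provide real transparency, one modest individual safeguard is to strip obvious personal identifiers from a prompt before it leaves your machine. The regex-based redactor below is a deliberately simple illustration of that habit rather than a guarantee of privacy: the two patterns are rough, and the contact details in the example are invented.

```python
import re

# Rough patterns for two common identifier types; real personal data
# is far messier than any short list of regexes can capture.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane.doe@uwaterloo.ca or call 519-555-0199 about my file."))
# Prints: Email [EMAIL] or call [PHONE] about my file.
```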
Further Readings
Embry, S. (2025, July 30). Sam Altman’s Warning: Everything You Tell ChatGPT Could End Up Being Used Against You. TechLaw Crossroads. https://www.techlawcrossroads.com/2025/07/sam-altmans-warning-everything-you-tell-chatgpt-could-end-up-being-used-against-you/
Gal, U. (2023, February 7). ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. The Conversation. https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283
Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11), 103364. https://doi.org/10.1016/j.isci.2021.103364
Miller, K. (2024, March 18). Privacy in an AI Era: How Do We Protect Our Personal Information? Stanford University Human-Centered AI. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information
Toesland, F. (2025, July 31). Microsoft Confirms It Cannot Ensure Data Sovereignty in the European Union. Tech Radar. https://www.techradar.com/pro/microsoft-admits-it-would-have-to-let-trump-spy-on-eu-data-if-demanded
Worker Exploitation
It is well known that the North American tech economy is built on less costly labour in other countries, at times under exploitative conditions; the term “digital colonialism” has been coined to refer to this problem. Moreover, Generative AI platforms would not function without both the free data users supply and the free labour those users provide in training LLMs. While tech workers in Africa, for example, are seeking to unionize in response to unfair working conditions, it is up to the general user to decide whether their use of LLM platforms constitutes a form of unfair extraction.
Further Readings
Anwar, A. (2025, March 31). Africa’s data workers are being exploited by foreign tech firms – 4 ways to protect them. The Conversation. https://theconversation.com/africas-data-workers-are-being-exploited-by-foreign-tech-firms-4-ways-to-protect-them-252957
Couldry, N., & Mejias, U. A. (2018). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336-349. https://doi-org.proxy.lib.uwaterloo.ca/10.1177/1527476418796632
Heikkilä, M. (2023, June 13). We are all AI’s free data workers. MIT Technology Review. https://www.technologyreview.com/2023/06/13/1074560/we-are-all-ais-free-data-workers/
Perrigo, B. (2023, May 1). 150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting. TIME. https://time.com/6275995/chatgpt-facebook-african-workers-union/
Rani, U., & Dhir, R. K. (2024, December 10). The Artificial Intelligence illusion: How invisible workers fuel the “automated” economy. International Labour Organization. https://www.ilo.org/resource/article/artificial-intelligence-illusion-how-invisible-workers-fuel-automated
